Noise resilient quantum interface based on QND interaction
We propose a quantum interface protocol based on two quantum-non-demolition
(QND) interactions arranged either in sequence or in parallel. Since the QND
coupling arises naturally in interactions between light and a macroscopic
ensemble of atoms, or between light and a micro-mechanical oscillator, the
proposed interface is capable of transferring a state of light onto these
matter systems. The transfer itself is perfect and deterministic for any
quantum state, for arbitrarily small interaction strengths, and for arbitrarily
large noise of the target system. It requires all-optical pre-processing with a
coupling stronger than that between the light and the matter, and a
displacement feed-forward correction of the matter system. We also suggest a
probabilistic version of the interface, which eliminates the need for the
feed-forward correction at the cost of a reduced success rate. Applications of
the interface include the construction of a quantum memory and state
preparation for quantum sensing. Comment: 6 pages, 5 figures
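As a hedged sketch of the QND building block the abstract relies on, the snippet below writes a standard QND interaction on the quadratures (X_A, P_A, X_B, P_B) as a symplectic matrix and checks that it is a valid Gaussian operation; the quadrature convention and the coupling strength are illustrative assumptions, not the paper's specific two-QND arrangement.

```python
import numpy as np

# Quadrature ordering: (X_A, P_A, X_B, P_B); A = light mode, B = matter mode.
def qnd(kappa):
    """Symplectic matrix of a standard QND interaction:
    X_B -> X_B + kappa * X_A  (light's X is written onto the matter),
    P_A -> P_A - kappa * P_B  (back-action on the light's P)."""
    S = np.eye(4)
    S[2, 0] = kappa
    S[1, 3] = -kappa
    return S

# Symplectic form for two modes; any Gaussian unitary must preserve it.
Omega = np.array([[ 0, 1,  0, 0],
                  [-1, 0,  0, 0],
                  [ 0, 0,  0, 1],
                  [ 0, 0, -1, 0]])

S = qnd(0.7)
print(np.allclose(S @ Omega @ S.T, Omega))
```

A transfer protocol chains such matrices with local Gaussian pre-processing and a feed-forward displacement; the product of the corresponding symplectic matrices can be inspected the same way.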
Coherent-state phase concentration by quantum probabilistic amplification
We propose a novel scheme for coherent-state phase concentration by
probabilistic measurement-induced amplification. The amplification scheme uses
a novel architecture: thermal-noise addition (instead of single-photon
addition) followed by feasible multiple-photon subtraction using a realistic
photon-number-resolving detector. In contrast to a deterministic amplifier, it
substantially amplifies weak coherent states while simultaneously reducing
their phase uncertainty.
Wavelet-based Adaptive Techniques Applied to Turbulent Hypersonic Scramjet Intake Flows
The simulation of hypersonic flows is computationally demanding due to large
gradients of the flow variables caused by strong shock waves and thick boundary
or shear layers. The resolution of those gradients imposes the use of extremely
small cells in the respective regions. Taking turbulence into account
intensifies the variation in scales even more. Furthermore, hypersonic flows
have been shown to be extremely grid sensitive. For the simulation of
three-dimensional configurations of engineering applications, this results in a
huge amount of cells and prohibitive computational time. Therefore, modern
adaptive techniques can provide a gain with respect to computational costs and
accuracy, allowing the generation of locally highly resolved flow regions where
they are needed and retaining an otherwise smooth distribution. An h-adaptive
technique based on wavelets is employed for the solution of hypersonic flows.
The compressible Reynolds averaged Navier-Stokes equations are solved using a
differential Reynolds stress turbulence model, well suited to predict
shock-wave-boundary-layer interactions in high enthalpy flows. Two test cases
are considered: a compression corner and a scramjet intake. The compression
corner is a classical test case in hypersonic flow investigations because it
poses a shock-wave-turbulent-boundary-layer interaction problem. The adaptive
procedure is applied to a two-dimensional configuration as validation. The
scramjet intake is first computed in two dimensions. Subsequently, a
three-dimensional geometry is considered. Both test cases are validated with
experimental data and compared to non-adaptive computations. The results show
that the use of an adaptive technique for hypersonic turbulent flows at high
enthalpy conditions can strongly improve the performance in terms of memory and
CPU time while at the same time maintaining the required accuracy of the
results. Comment: 26 pages, 29 figures, submitted to AIAA Journal
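The h-adaptation criterion described above can be illustrated in one dimension: interpolating-wavelet detail coefficients measure how poorly the coarse grid predicts fine-grid values, and cells are flagged for refinement where the detail exceeds a threshold. This is a minimal sketch of the criterion, not the authors' solver; the grid, the shock-like profile, and the threshold eps are assumptions.

```python
import numpy as np

def flag_refinement(u, eps):
    """Flag odd fine-grid points whose interpolating-wavelet detail
    coefficient exceeds eps (1-D sketch of the h-adaptation criterion).
    u: values on a uniform fine grid with an odd number of points."""
    coarse = u[::2]                               # even points form the coarse grid
    predicted = 0.5 * (coarse[:-1] + coarse[1:])  # linear prediction at odd points
    detail = np.abs(u[1::2] - predicted)          # detail (wavelet) coefficients
    return detail > eps                           # True where refinement is kept

x = np.linspace(0.0, 1.0, 257)
u = np.tanh(50.0 * (x - 0.5))   # shock-like profile, sharp gradient at x = 0.5
flags = flag_refinement(u, 1e-3)
print(int(flags.sum()), "of", len(flags), "cells flagged")
```

The flags cluster around the steep gradient at x = 0.5, while the smooth regions fall below the threshold and keep a coarse grid.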
Vacuum as a less hostile environment to entanglement
We derive sufficient conditions for infinite-dimensional systems whose
entanglement is not completely lost in a finite time during decoherence caused
by a passive interaction with local vacuum environments. The sufficient conditions
allow us to clarify a class of bipartite entangled states which preserve their
entanglement or, in other words, are tolerant against decoherence in a vacuum.
We also discuss such a class for entangled qubits. Comment: Replaced by the published version
NLO merging in tt+jets
This talk discusses the application of recently introduced methods for merging
NLO calculations of successive jet multiplicities to the production of top
pairs in association with jets; in particular, a fresh look is taken at the
top-quark forward-backward asymmetries. Emphasis will be put on the achieved
theoretical accuracy and the associated perturbative and non-perturbative
error estimates. Comment: 6 pages, 3 figures, proceedings contribution for EPS 2013, Stockholm,
17-24 July
A lead-width distribution for Antarctic sea ice: a case study for the Weddell Sea with high-resolution Sentinel-2 images
Using Copernicus Sentinel-2 images we derive a statistical lead-width distribution for the Weddell Sea. While previous work focused on the Arctic, this is the first lead-width distribution for Antarctic sea ice. Previous studies suggest that the lead-width distribution follows a power law with a positive exponent; however, their results for the power-law exponents are not all in agreement with each other.
To detect leads we create a sea-ice surface-type classification based on 20 carefully selected cloud-free Sentinel-2 Level-1C products, which have a resolution of 10 m. The observed time period is from November 2016 until February 2018, covering only the months from November to April. We apply two different fitting methods to the measured lead widths. The first fitting method is a linear fit, while the second method is based on a maximum likelihood approach. Here, we use both methods for the same lead-width data set to observe differences in the calculated power-law exponent.
To further investigate influences on the power-law exponent, we define two different thresholds: one for open-water-covered leads and one for open-water-covered and nilas-covered leads. The influence of the lead threshold on the exponent is larger for the linear fit than for the method based on the maximum likelihood approach. We show that the exponent of the lead-width distribution ranges between 1.110 and 1.413, depending on the applied fitting method and lead threshold. This exponent for the Weddell Sea sea ice is smaller than the previously observed exponents for Arctic sea ice.
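The two fitting routes compared above can be sketched as follows: the closed-form maximum-likelihood estimator for a continuous power law p(x) ∝ x^(-alpha), x ≥ x_min, and a least-squares slope of the log-log binned density. The function names, the synthetic sample, and the bin choice are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

def mle_exponent(widths, xmin):
    """Closed-form maximum-likelihood estimate of alpha for a
    continuous power law p(x) ~ x**(-alpha), x >= xmin."""
    x = np.asarray(widths, dtype=float)
    x = x[x >= xmin]
    return 1.0 + len(x) / np.sum(np.log(x / xmin))

def linear_fit_exponent(widths, bins):
    """Least-squares slope of the log-log binned density (the 'linear
    fit' route; its result depends on the binning, unlike the MLE)."""
    counts, edges = np.histogram(widths, bins=bins)
    density = counts / np.diff(edges)          # correct for unequal bin widths
    centers = np.sqrt(edges[:-1] * edges[1:])  # geometric bin centers
    mask = counts > 0
    slope, _ = np.polyfit(np.log(centers[mask]), np.log(density[mask]), 1)
    return -slope

# Synthetic check: inverse-CDF samples from p(x) ~ x**(-2.5), x >= 10 (metres).
rng = np.random.default_rng(0)
u = rng.random(100_000)
samples = 10.0 * (1.0 - u) ** (-1.0 / 1.5)     # true alpha = 2.5
print(round(mle_exponent(samples, 10.0), 2))
```

The MLE recovers alpha ≈ 2.5 here; the binned fit is noisier in the sparse tail, which is one source of the threshold sensitivity the abstract reports.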
Fault Models for Quantum Mechanical Switching Networks
The difference between faults and errors is that, unlike faults, errors can
be corrected using control codes. In classical test and verification one
develops a test set separating a correct circuit from a circuit containing any
considered fault. Classical faults are modelled at the logical level by fault
models that act on classical states. The stuck fault model, thought of as a
lead connected to a power rail or to ground, is most typically considered. A
classical test set complete for the stuck fault model propagates both binary
basis states, 0 and 1, through all nodes in a network and is known to detect
many physical faults; it allows all circuit nodes to be completely tested and
verifies the function of many gates. It is natural to ask if one may adapt any
of the known classical
methods to test quantum circuits. Of course, classical fault models do not
capture all the logical failures found in quantum circuits. The first obstacle
faced when using methods from classical test is developing a set of realistic
quantum-logical fault models. Developing fault models to abstract the test
problem away from the device level motivated our study. Several results are
established. First, we describe typical modes of failure present in the
physical design of quantum circuits. From this we develop fault models for
quantum binary circuits that enable testing at the logical level. The
application of these fault models is shown by adapting the classical test set
generation technique known as constructing a fault table to generate quantum
test sets. A test set developed using this method is shown to detect each of
the considered faults. Comment: (almost) Forgotten rewrite from 200
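The classical fault-table construction that the abstract adapts can be sketched on a toy classical circuit: enumerate single stuck-at faults, record which input vectors distinguish each faulty circuit from the fault-free one, then greedily pick vectors until every detectable fault is covered. The circuit, line names, and greedy cover below are illustrative assumptions; the quantum fault models themselves are not reproduced.

```python
from itertools import product

# Toy circuit: y = (a AND b) OR c, with internal line n1 = a AND b.
LINES = ["a", "b", "c", "n1", "y"]

def evaluate(a, b, c, fault=None):
    """Evaluate the circuit; fault = (line, stuck_value) forces one
    line to a constant, fault = None gives the fault-free circuit."""
    def apply(name, value):
        return fault[1] if fault is not None and fault[0] == name else value
    a, b, c = apply("a", a), apply("b", b), apply("c", c)
    n1 = apply("n1", a & b)
    return apply("y", n1 | c)

faults = [(line, v) for line in LINES for v in (0, 1)]
vectors = list(product((0, 1), repeat=3))

# Fault table: for each fault, the set of input vectors that detect it.
table = {f: {v for v in vectors if evaluate(*v, fault=f) != evaluate(*v)}
         for f in faults}

# Greedy cover: keep adding the vector that detects the most
# still-undetected faults until everything detectable is covered.
undetected = {f for f, vs in table.items() if vs}
tests = []
while undetected:
    best = max(vectors, key=lambda v: sum(v in table[f] for f in undetected))
    tests.append(best)
    undetected -= {f for f in undetected if best in table[f]}
print(len(tests), "test vectors cover all", len(faults), "stuck-at faults")
```

The quantum test sets in the abstract follow the same table-then-cover flow, with the stuck-at fault list and the detection check replaced by quantum-logical fault models.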
Ageing analysis of the giant radio galaxy J1343+3758
Deep 4860 and 8350 MHz observations with the VLA and 100-m Effelsberg telescopes,
supplementing available radio survey maps at the frequencies of 327 MHz (WENSS
survey) and 1400 MHz (NVSS survey), are used to study the synchrotron spectra and
radiative ages of relativistic particles in opposite lobes of the giant
radio galaxy J1343+3758 (Machalski & Jamrozy [CITE]). The classical spectral ageing
analysis (e.g. Myers & Spangler [CITE]) with assumption of equipartition magnetic
fields gives a mean separation velocity of about 0.16 c and 0.12 c measured
with respect to the emitting plasma, and
suggests a maximum particle age of about 48 and 50 Myr in the NE and SW lobes,
respectively. On the contrary, a mean jet-head advanc